ML Theory Lecture 4 Matus Telgarsky
Author
Abstract
Currently in the course we are showing that we can approximate continuous functions on compact sets with piecewise constant functions; in other words, with boxes. We will finish this today. Remaining in the representation section we still have: polynomial fit of continuous functions, functions we can fit succinctly, and GANs.

Remark 1.1 (Homework comment). A brief comment about problem 2(c): it wasn’t stated clearly enough, so hardly anything was taken off, but it was really asking for something about the LIL and Hoeffding disagreeing; something along the lines of there being a lower bound (anti-concentration) infinitely often sufficed. Note one funny thing we can do with Hoeffding. Hoeffding by default holds for a fixed n, but we can instantiate it for each n ≥ 1 with δn := δ/(n(n + 1)). Thus ...
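To see where the truncated “Thus ...” is headed, here is a sketch of the standard union-bound computation this sets up (not taken from the lecture itself; the underlying random variables are unspecified in the excerpt, so assume i.i.d. X1, X2, ... taking values in [0, 1]). The chosen δn telescope:

    \sum_{n \ge 1} \delta_n
        = \delta \sum_{n \ge 1} \Bigl( \tfrac{1}{n} - \tfrac{1}{n+1} \Bigr)
        = \delta,
    % so a union bound gives a single event, of probability at least 1 - \delta,
    % on which the two-sided Hoeffding inequality holds for every n simultaneously:
    \Bigl| \tfrac{1}{n} \sum_{i=1}^{n} X_i - \mathbb{E} X_1 \Bigr|
        \le \sqrt{ \frac{ \ln\bigl( 2 n (n+1) / \delta \bigr) }{ 2 n } }
        \qquad \text{for all } n \ge 1.

For comparison with the LIL mentioned in the remark, this uniform-in-n bound scales roughly like √(ln n / n), whereas the LIL rate is √(ln ln n / n).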
Similar sources
ML Theory Lecture 8 Matus Telgarsky
Let’s briefly recap where we are in the course.
• We’re almost done with the “representation” part of the course. Today we’ll establish the key result in the “succinctness” portion: there exist small, deep networks which cannot be approximated by shallow networks, even if they are huge.
• The next class we’ll talk about GANs and other probability models, which will conclude this “representatio...
EECS 598-005: Theoretical Foundations of Machine Learning, Fall 2015, Lecture 15: Neural Networks Theory
ML Theory Lecture 9
Today is the last lecture on representation. We’ve shown that neural nets can fit continuous functions, and also that they can fit some other functions succinctly (with small representation size), but we’ve only looked at univariate outputs! Today we’ll close the topic with a look at a much different representation problem: using machine learning models to approximate probability distributions!...
ML Theory Lecture 2 Matus Telgarsky
Coherence. In this problem we are considering a curious setup where past and future learning are not distinct; instead, vectors x ∈ Rd just keep coming, along with correct labels y ∈ {−1,+1}, and we can learn forever if we wish. Coherence will be provided in the following interesting way. There will be a fixed vector u ∈ Rd and scalar γ > 0 (“fixed” means: fixed across all time) so that every p...
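The excerpt cuts off before stating the coherence condition, but the setting it describes is the classical margin / online-classification setup. As an illustration only (the perceptron update below is the textbook choice for such a stream, not necessarily what this lecture analyzes), here is a minimal sketch assuming every pair in the stream satisfies y · ⟨u, x⟩ ≥ γ:

    import numpy as np

    # Hypothetical illustration: a fixed unit vector u and a margin gamma > 0
    # such that every streamed pair (x, y) satisfies y * <u, x> >= gamma.
    rng = np.random.default_rng(0)
    d, gamma = 5, 0.1
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)

    def stream(n_examples):
        """Yield n_examples pairs (x, y) respecting the coherence assumption."""
        produced = 0
        while produced < n_examples:
            x = rng.normal(size=d)
            margin = u @ x
            if abs(margin) >= gamma:          # skip points inside the margin band
                yield x, (1 if margin > 0 else -1)
                produced += 1

    # Classical perceptron: predict sign(<w, x>); on a mistake, add y * x to w.
    w = np.zeros(d)
    mistakes = 0
    for x, y in stream(10_000):
        if y * (w @ x) <= 0:
            w += y * x
            mistakes += 1

    # Under coherence, mistakes <= (R / gamma)^2 with R = max ||x||
    # (Novikoff's bound), no matter how long the stream runs.
    print("mistakes:", mistakes)

The point of the coherence assumption is exactly that last comment: the mistake count stays bounded independently of how long we keep learning, which is what makes the “learn forever” setup interesting.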